SwePub

Result list for the search "db:Swepub ;pers:(Jantsch Axel);pers:(Millberg Mikael)"

Search: db:Swepub > Jantsch Axel > Millberg Mikael

  • Result 1-10 of 17
1.
  •  
2.
  • Jantsch, Axel, et al. (author)
  • Networks on Chip
  • 2001
  • In: Workshop at the European Solid State Circuits Conference.
  • Conference paper (peer-reviewed)
  •  
3.
  • Kumar, Shashi, et al. (author)
  • A network on chip architecture and design methodology
  • 2002
  • In: VLSI 2002. IEEE conference proceedings. ISBN 0769514863, pp. 105-112
  • Conference paper (peer-reviewed), abstract (an illustrative sketch follows this entry):
    • We propose a packet switched platform for single chip systems which scales well to an arbitrary number of processor-like resources. The platform, which we call Network-on-Chip (NOC), includes both the architecture and the design methodology. The NOC architecture is an m x n mesh of switches, and resources are placed on the slots formed by the switches. We assume a direct layout of the 2-D mesh of switches and resources, providing physical-architectural level design integration. Each switch is connected to one resource and four neighboring switches, and each resource is connected to one switch. A resource can be a processor core, memory, an FPGA, a custom hardware block or any other intellectual property (IP) block which fits into the available slot and complies with the interface of the NOC. The NOC architecture is essentially the on-chip communication infrastructure comprising the physical layer, the data link layer and the network layer of the OSI protocol stack. We define the concept of a region, which occupies an area of any number of resources and switches. This concept allows the NOC to accommodate large resources such as large memory banks, FPGA areas, or special purpose computation resources such as high performance multiprocessors. The NOC design methodology consists of two phases. In the first phase a concrete architecture is derived from the general NOC template. The concrete architecture defines the number of switches and shape of the network, the kind and shape of regions and the number and kind of resources. The second phase maps the application onto the concrete architecture to form a concrete product.
  •  
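A minimal sketch, not the authors' code, of the m x n mesh template described in the abstract above: each switch has up to four neighbouring switches and one attached resource slot.

    # Illustrative only: build the m x n mesh of switches from the abstract,
    # each with up to four neighbouring switches and one resource slot.
    def build_mesh(m, n):
        mesh = {}
        for row in range(m):
            for col in range(n):
                neighbours = []
                if row > 0:
                    neighbours.append((row - 1, col))   # north
                if row < m - 1:
                    neighbours.append((row + 1, col))   # south
                if col > 0:
                    neighbours.append((row, col - 1))   # west
                if col < n - 1:
                    neighbours.append((row, col + 1))   # east
                mesh[(row, col)] = {"neighbours": neighbours,
                                    "resource": f"slot_{row}_{col}"}  # one resource per switch
        return mesh

    noc = build_mesh(4, 4)
    print(len(noc), "switches;", len(noc[(1, 1)]["neighbours"]), "neighbours at an inner switch")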
4.
  • Lu, Zhonghai, et al. (author)
  • Flow Regulation for On-Chip Communication
  • 2009
  • In: DATE. ISBN 9781424437818, pp. 578-581
  • Conference paper (peer-reviewed), abstract (an illustrative sketch follows this entry):
    • We propose (sigma, rho)-based flow regulation as a design instrument for System-on-Chip (SoC) architects to control quality-of-service and achieve cost-effective communication, where sigma bounds the traffic burstiness and rho the traffic rate. This regulation changes the burstiness and timing of traffic flows, and can be used to decrease delay and reduce buffer requirements in the SoC infrastructure. In this paper, we define and analyze the regulation spectrum, which bounds the upper and lower limits of regulation. Experiments on a Network-on-Chip (NoC) with guaranteed service demonstrate the benefits of regulation. We conclude that flow regulation may exert significant positive impact on communication performance and buffer requirements.
  •  
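A hedged sketch of what a (sigma, rho) regulator does, in the standard network-calculus sense the abstract builds on: at most sigma + rho * t traffic units may pass in any window of length t, which behaves like a token bucket of depth sigma refilled at rate rho. This is an illustration, not the paper's implementation; the class and parameter names are my own.

    # Sketch only: a (sigma, rho) regulator modelled as a token bucket.
    class SigmaRhoRegulator:
        def __init__(self, sigma, rho):
            self.sigma = sigma        # allowed burstiness (flits)
            self.rho = rho            # long-term rate (flits per cycle)
            self.tokens = sigma
            self.last_cycle = 0

        def admit(self, cycle, flits):
            """True if 'flits' may enter the network at 'cycle'; otherwise they are held back."""
            self.tokens = min(self.sigma,
                              self.tokens + self.rho * (cycle - self.last_cycle))
            self.last_cycle = cycle
            if flits <= self.tokens:
                self.tokens -= flits
                return True
            return False              # regulation delays traffic, it never drops it

    reg = SigmaRhoRegulator(sigma=4, rho=0.5)        # bursts of up to 4 flits, 0.5 flits/cycle
    print([reg.admit(cycle, 2) for cycle in range(0, 10, 2)])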
5.
  • Lu, Zhonghai, et al. (author)
  • NNSE: Nostrum Network-on-Chip Simulation Environment
  • 2005
  • In: Proceedings of Swedish System-on-Chip Conference, Stockholm, Sweden, April 2005.
  • Conference paper (other academic/artistic), abstract (an illustrative sketch follows this entry):
    • A main challenge for Network-on-Chip (NoC) design is to select a network architecture that suits a particular application. NNSE enables analysis of the performance impact of NoC configuration parameters. It allows one to (1) configure a network with respect to topology, flow control and routing algorithm etc.; (2) configure various regular and application-specific traffic patterns; (3) evaluate the network with the traffic patterns in terms of latency and throughput.
  •  
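The three evaluation steps listed in the abstract can be pictured as a small experiment description; the sketch below is hypothetical and does not reflect NNSE's actual input format or API.

    # Hypothetical experiment description mirroring steps (1)-(3) above;
    # not NNSE's real configuration format.
    experiment = {
        "network": {                            # (1) network configuration
            "topology": "mesh_4x4",
            "flow_control": "wormhole",
            "routing": "xy_deterministic",
        },
        "traffic": {                            # (2) regular or application-specific pattern
            "pattern": "uniform_random",
            "injection_rate": 0.2,              # flits per node per cycle
        },
        "metrics": ["latency", "throughput"],   # (3) what the evaluation reports
    }
    print(experiment["network"]["topology"], experiment["metrics"])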
6.
  • Millberg, Mikael, et al. (author)
  • A study of NoC Exit Strategies
  • 2007
  • In: NOCS 2007. ISBN 9780769527734, p. 217
  • Conference paper (peer-reviewed), abstract:
    • The throughput of a network is limited due to several interacting components. Analysing simulation results made it clear that the component worth attacking was the exit bandwidth between the network and the connected resources. The obvious approach is to increase this bandwidth; the benefit is a higher throughput of the network and a significant lowering of the buffer requirements at the entry points of the network, because worst-case scenarios now happen at a higher injection rate. The results we present show significant differences in throughput as well as in average and worst-case latency.
  •  
7.
  • Millberg, Mikael, 1973- (author)
  • Architectural Techniques for Improving Performance in Networks on Chip
  • 2011
  • Doctoral thesis (other academic/artistic), abstract (an illustrative sketch follows this entry):
    • The main aim of this thesis is to propose techniques for enhancing performance in Networks on Chip. In addition, a concrete proposal for a protocol stack within our NoC platform Nostrum is presented. Nostrum inherently supports both Best Effort and Guaranteed Throughput traffic delivery. It employs a deflective routing scheme for best-effort traffic delivery, which gives a small switch footprint combined with robustness to disturbances in the network. For traffic delivery with hard guarantees, a TDMA-based scheme is used. Several stages are involved in the transmission process in a NoC, and in the included papers I propose a set of strategies to enhance performance in several of these stages. The strategies are summarised as follows. Temporally Disjoint Networks: a physical network can, potentially, be seen to contain a set of separate networks, and which of them a packet enters depends on when it enters the physical network; consequently, different traffic types can be carried in the different networks. Looped Containers provide a means to set up virtual circuits in networks using deflective routing: high-priority container packets are inserted into the network to follow a predefined, closed route between source and destination; at the sender side the packets are loaded and sent to the destination, where they are unloaded and sent back. Proximity Congestion Awareness reduces the load of the network by diverting packets away from congested areas; it can increase the maximum traffic load by a factor of 20. Dual Packet Exit increases the exit bandwidth of the network, leading to a 50 percent reduction in worst-case latency and a 30 percent reduction in average latency, as well as lowered buffer usage. Priority Based Forced Requeue prematurely lifts low-priority packets out of the network to be requeued; packets that have not yet entered the network compete with packets inside the network, which gives tighter bounds on admission, with a reduction of worst-case latencies by 50 percent. Furthermore, Operational Efficiency is proposed as a measure to quantify how effective a network is and is defined as the throughput per buffers used in the system. Increasing the injection of packets into the network to increase system throughput has an associated cost and can be optimised to save energy.
  •  
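A simplified reading of the Temporally Disjoint Networks idea (my illustration, not the thesis code): if every packet advances exactly one hop per cycle in a mesh and is never buffered, a packet injected at switch (x, y) on cycle t can only ever occupy switches whose colour (x + y + elapsed cycles) mod 2 matches its own, so packets with different TDN ids never share a switch.

    # Sketch: TDN membership for a mesh where every hop takes one cycle (assumed here).
    def tdn_id(x, y, injection_cycle, num_tdns=2):
        return (x + y + injection_cycle) % num_tdns

    # Two packets injected one cycle apart at the same switch fall into
    # different temporally disjoint networks and can never collide:
    print(tdn_id(0, 0, injection_cycle=10), tdn_id(0, 0, injection_cycle=11))   # -> 0 1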
8.
  • Millberg, Mikael, et al. (author)
  • Guaranteed bandwidth using looped containers in temporally disjoint networks within the Nostrum network on chip
  • 2004
  • In: Design, Automation and Test in Europe Conference and Exhibition, Vols 1 and 2, Proceedings. Los Alamitos, USA: IEEE Computer Soc. ISBN 0769520855, pp. 890-895
  • Conference paper (peer-reviewed), abstract (an illustrative sketch follows this entry):
    • In today's emerging Networks-on-Chip, there is a need for different traffic classes with different Quality-of-Service guarantees. Within our NoC architecture Nostrum, we have implemented a service of Guaranteed Bandwidth (GB), and latency, in addition to the already existing service of Best-Effort (BE) packet delivery. The guaranteed bandwidth is accessed via Virtual Circuits (VCs). The VCs are implemented using a combination of two concepts that we call 'Looped Containers' and 'Temporally Disjoint Networks' (TDNs). The Looped Containers are used to guarantee access to the network, independently of the current network load and without dropping packets, and the TDNs are used in order to achieve several VCs, plus ordinary BE traffic, in the network. The TDNs are a consequence of the deflective routing policy used, and give rise to an explicit time-division multiplexing within the network. To prove our concept, an HDL implementation has been synthesised and simulated. The cost in terms of additional hardware, as well as additional bandwidth, is very low: less than 2 percent in both cases! Simulations showed that ordinary BE traffic is practically unaffected by the VCs.
  •  
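A toy sketch of the looped-container mechanism as described above, not the Nostrum HDL: a container packet circulates on a predefined closed route, is loaded where the route passes the sender and unloaded where it passes the receiver, so the virtual circuit's bandwidth is available regardless of other traffic. The names and the route are made up.

    from collections import deque

    # Toy model: one container advancing one hop per cycle around a closed route.
    def run_container(route, source, dest, payloads, cycles):
        queue, carried, delivered = deque(payloads), None, []
        for cycle in range(cycles):
            position = route[cycle % len(route)]
            if position == source and carried is None and queue:
                carried = queue.popleft()        # load at the sender
            elif position == dest and carried is not None:
                delivered.append(carried)        # unload at the receiver
                carried = None
        return delivered

    route = [(0, 0), (0, 1), (1, 1), (1, 0)]     # closed loop through four switches
    print(run_container(route, source=(0, 0), dest=(1, 1),
                        payloads=["msg1", "msg2", "msg3"], cycles=12))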
9.
  •  
10.
  • Millberg, Mikael, et al. (author)
  • Increasing NoC performance and utilisation using a Dual Packet Exit strategy
  • 2007
  • In: DSD 2007. Los Alamitos: IEEE Computer Soc. ISBN 9780769529783, pp. 511-518
  • Conference paper (peer-reviewed), abstract (an illustrative sketch follows this entry):
    • When designing a network the use of buffers is inevitable. Buffers are used at the entry points, inside, and at the exits of the network. The usage of these buffers significantly changes the performance of the system as a whole. In order to enhance buffer utilisation, the concept of letting more than one packet exit the network at every switch each clock cycle is introduced: Dual Packet Exit (DPE). The approach is tried on a 4x4 and a 6x6 mesh. We demonstrate the buffer usage in combination with different routing strategies for best-effort performance. The results we present show a 50% reduction in worst-case latency and a 30% reduction in average latency, as well as an increased throughput from both a system and a network perspective. We define the term Operational Efficiency as a measure of network efficiency and show that it increases by roughly 20% with the DPE technique.
  •  
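The Operational Efficiency measure named in the abstract (throughput per buffers used) boils down to a simple ratio; the figures below are invented purely to show how the comparison works.

    # Sketch of the Operational Efficiency comparison; numbers are illustrative only.
    def operational_efficiency(accepted_flits_per_cycle, buffers_used):
        return accepted_flits_per_cycle / buffers_used

    single_exit = operational_efficiency(accepted_flits_per_cycle=3.0, buffers_used=64)
    dual_exit = operational_efficiency(accepted_flits_per_cycle=3.5, buffers_used=60)
    print(f"{single_exit:.4f} vs {dual_exit:.4f} accepted flits/cycle per buffer")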